Scientists Call for a Positive Vision to Steer AI Toward the Public Good
The article "Scientists Need a Positive Vision for AI," published in IEEE Spectrum, is a rallying cry for researchers, engineers, and policymakers to move beyond doom-and-gloom narratives about artificial intelligence (AI) and instead carve out a clear path for the technology to benefit society. (IEEE Spectrum)
The Stakes Are High
AI is no longer just a tool. According to the authors, Bruce Schneier and Nathan E. Sanders, sophisticated AI systems are proliferating in a world already battling rising authoritarianism, environmental stress, and rampant misinformation. (IEEE Spectrum) They outline key risks:
- Proliferation of AI-generated "slop" in media, deepfakes, and extremist messaging. (IEEE Spectrum)
- Exploitation of workers in the Global South (for data labelling) and of creators whose work is used without compensation. (IEEE Spectrum)
- Huge energy footprint associated with AI model training and deployment. (IEEE Spectrum)
- A narrowing of the scientific agenda: public investment flowing into AI at the expense of other fields; consolidation by big tech companies. (IEEE Spectrum)
Given this, the authors warn that if scientists see AI only as a "lost cause," they may disengage, leaving the direction of the technology to the least accountable actors. (IEEE Spectrum)
A Vision for What's Possible
But they don't stop there. Schneier and Sanders argue that scientists do have the power to shape AI's future, provided they adopt a positive vision and act on it. Here are the high-level elements they suggest:
- Celebrate and scale positive applications of AI: e.g., bridging language barriers for marginalized sign languages and indigenous African languages. (IEEE Spectrum)
- Use AI to strengthen democratic processes: including scaling individual dialogues, supporting civic deliberation, and accelerating scientific discovery. (IEEE Spectrum)
- Engage as scientists and engineers in reforming the structures of AI development: urging ethical norms, resisting harmful uses, and advocating institutional change (in universities, professional societies, and democratic organisations). (IEEE Spectrum)
They pull from their new book Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship, where they lay out four key actions for policymakers, and extend those to scientists and technologists as well. (IEEE Spectrum)
Why the Research Community Must Engage
The authors draw attention to a meaningful divide: while a substantial portion of AI conference authors believe in the positive effects of AI, broader scientific communities report far more concern than optimism. In one survey, negative sentiment toward generative-AI usage outweighed excitement nearly 3:1. (IEEE Spectrum) If this critical community disengages, the result may be that fewer people are shaping AI development with the public interest in mind, instead leaving it to corporate or geopolitical actors. The authors urge scientists not to sit out but to lean in. (IEEE Spectrum)
Implications for the Field
For you, whether you're working in AI, data science, policy, or engineering (as I know you are, Sheng), this article sends a clear message:
- Don't treat AI ethics as an afterthought. Proactively embedding positive visions and public-good incentives in your projects matters.
- Engage institutionally. Whether it's your company, your university, or your professional society, the norms and incentives built into the system will determine how AI evolves.
- Bridge optimism and realism. Hope doesn't mean ignoring harm; it means recognising both risk and possibility, and acting accordingly.
- Your voice matters. With your background in AI/data science leadership, you're well placed to champion these issues: to influence design decisions, resource flows, and the trajectory of projects.
In short: the future of AI isn't predetermined, but it will be shaped by the choices we make today. And according to Schneier and Sanders, scientists and engineers must choose to lead in building it.
Glossary
- Generative AI: Artificial intelligence systems (like large language models) that can generate new content (text, images, audio) rather than just analyse or classify existing data.
- Deepfake: Media (video, audio, image) generated or altered using AI so as to convincingly mimic a real person's likeness or voice, often used maliciously.
- Foundation model: A large-scale AI model (e.g., a large language model) trained on broad data and then adapted (fine-tuned) for many downstream tasks.
- Public good: Something that benefits society broadly (rather than private interests) and typically requires collaboration, regulation or institutional support to realise.
Source link: Scientists Need a Positive Vision for AI (IEEE Spectrum)